12 research outputs found

    Statistical Engineering: A Causal-Stochastic Modeling Research Update

    In the ASEM-IAC 2012, Cotter (2012) summarized the prior works that led to the proposal for statistical engineering, identified the gaps in knowledge that statistical engineering needs to address, explored additional gaps not covered by those works, set forth a working definition of and body of knowledge for statistical engineering, and proposed potential systems contributions the Engineering Management profession could make toward developing statistical engineering. In 2014, the ASQ Statistics Division, DOT&E, NASA, and IDA co-sponsored a Statistical Engineering Agreement to jointly research development of the discipline of statistical engineering. The statistics community has continued to frame statistical engineering within the context of the general linear model (GLM). However, incorporating deterministic engineering causal models within the GLM framework leaves missing links of conditional dependencies, yields models that are difficult to fit or that may not converge to a unique solution, and may not increase understanding of the physical causal processes in dynamic stochastic systems. Integrating engineering-specific deterministic causal models with stochastic models, to provide additional knowledge of the risk of variance from the expected response, is a key gap in knowledge that must be addressed to realize statistical engineering as a discipline. This paper updates research into integrating deterministic engineering models as system-dynamic causal components of functional causal Bayesian networks within a state-space framework to model joint deterministic-stochastic dynamic causal effects.
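
    The joint deterministic-stochastic idea can be illustrated with a minimal sketch (not the paper's model): a deterministic physics model, here an assumed Newton's-law-of-cooling discretization with hypothetical parameters, supplies the state transition of a state-space model, while process and observation noise carry the stochastic risk of variance from the expected response; a Kalman filter then fuses the two.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical physics: Newton's cooling, T' = -k*(T - T_amb),
# discretized as the deterministic transition of a scalar state-space model.
k, T_amb, dt = 0.3, 20.0, 0.1          # assumed physical parameters
a = 1.0 - k * dt                        # deterministic transition coefficient
b = k * dt * T_amb                      # deterministic offset
q, r = 0.05, 0.5                        # process / observation noise variances

# Simulate the joint deterministic-stochastic system.
n, T = 200, 90.0
truth, obs = [], []
for _ in range(n):
    T = a * T + b + rng.normal(0.0, np.sqrt(q))   # physics + process noise
    truth.append(T)
    obs.append(T + rng.normal(0.0, np.sqrt(r)))   # noisy measurement

# Kalman filter: the deterministic model drives the prediction step,
# the stochastic terms propagate the uncertainty.
m, P = 90.0, 1.0
est = []
for y in obs:
    m_pred, P_pred = a * m + b, a * P * a + q     # predict via physics model
    K = P_pred / (P_pred + r)                     # Kalman gain
    m, P = m_pred + K * (y - m_pred), (1 - K) * P_pred
    est.append(m)

rmse_raw = np.sqrt(np.mean((np.array(obs) - np.array(truth)) ** 2))
rmse_filt = np.sqrt(np.mean((np.array(est) - np.array(truth)) ** 2))
print(rmse_filt < rmse_raw)   # filtered estimate should beat raw observations
```

    Embedding the deterministic model in the transition (rather than regressing on it, GLM-style) keeps the conditional dependencies between successive states explicit, which is the gap the abstract describes.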

    Engineering Analytics: Research into the Governance Structure Needed to Integrate the Dominant Design Methodologies

    In the ASEM-IAC 2014, Cotter (2014) explored the current state of engineering design, identified the dominant approaches to engineering design, discussed potential contributions from the new field of data analytics to engineering design, and proposed an Engineering Analytics framework that integrates the dominant engineering design approaches and data analytics within a human-intelligence/machine-intelligence (HI-MI) design architecture. This paper reports research applying ontological engineering to integrate the dominant engineering design methodologies into a systemic engineering design decision governance architecture.

    A Systems Methodology for Measuring Operational Organization Effectiveness: A Study of the Original Equipment Computer Manufacturing Industry, 1948 to 2001

    Optimizing operational organizational effectiveness is the central, although often unstated, goal of engineering management and systems engineering research and applications. Two fundamental problems remain in pursuit of this goal. First, despite over fifty years of research in various disciplines, there is still no universally accepted definition of organizational effectiveness. Second, no methodology exists to identify the domains, dimensions, and determinants of operational organizational effectiveness and to dynamically model it within a given population. This research synthesizes a systems engineering methodology for identifying the domains, dimensions, and determinants of operational organizational effectiveness and for dynamically modeling it for an identified population. First, the methodology takes the concept of the niche from ecological theory as its definition of effectiveness: an organization that is able to sustain a real nonnegative growth rate in its niche dimension under a set of competitive conditions is defined as effective. Next, the methodology integrates organizational ecology and open systems theories, principles, and models into a unified systemic model of environmental and organizational domains and dimensions that structures research into the determinants of organizational effectiveness. Based on this model, the methodology gathers observable data on hypothesized determinants of effectiveness and applies event history survival and effectiveness analyses to identify the statistically significant determinants. The methodology's final two steps are to construct and validate a dynamic simulation model of organizational effectiveness based on the identified determinants and to perform sensitivity analyses.
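
    The event-history step above builds on standard survival estimation. As a minimal sketch (with invented firm data, not results from the study), a Kaplan-Meier estimator computes the probability that an organization survives in its niche past each observed exit time, handling right-censored firms that were still operating when observation ended:

```python
import numpy as np

# Hypothetical data: years each firm was observed in the niche;
# event=1 means the firm exited (failed), event=0 means censored
# (still operating at the end of the observation window).
durations = np.array([3, 5, 5, 8, 12, 12, 15, 20, 22, 30])
events    = np.array([1, 1, 0, 1,  1,  0,  1,  0,  1,  0])

def kaplan_meier(durations, events):
    """Return exit times and the survival probability just after each one."""
    times = np.sort(np.unique(durations[events == 1]))
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)              # firms still under observation
        exited = np.sum((durations == t) & (events == 1))
        s *= 1.0 - exited / at_risk                   # conditional survival at t
        surv.append(s)
    return times, np.array(surv)

times, surv = kaplan_meier(durations, events)
for t, s in zip(times, surv):
    print(f"S({t:>2}) = {s:.3f}")
```

    Hypothesized determinants of effectiveness would then enter as covariates in a regression-style survival model fitted over curves like this one.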

    A Proposed Taxonomy for the Systems Statistical Engineering Body of Knowledge

    In the ASEM-IAC 2012, Cotter (2012) identified the gaps in knowledge that statistical engineering needs to address, explored additional gaps in knowledge not addressed in the prior works, and set forth a working definition of and body of knowledge for statistical engineering. In the ASEM-IAC 2015, Cotter (2015) proposed a systemic causal Bayesian hierarchical model that addressed the knowledge gap needed to integrate deterministic mathematical engineering causal models within a stochastic framework. Missing, however, is the framework for specifying the hierarchical qualitative systems structures necessary and sufficient for specifying systemic causal Bayesian hierarchical models. In the ASEM-IAC 2016, Cotter (2016) specified the modeling methodology through which statistical engineering models could be developed, diagnosed, and applied to predict systemic mission performance. In the last research update, Cotter (2017) proposed revisions to and integration of IDEF0 as the framework for developing hierarchical qualitative systems models. In that work, Cotter noted that a hierarchical causal Bayesian socio-technical modeling body of knowledge was yet to be developed, validated, and peer reviewed. This paper reports research into the development of a core taxonomy for the systems statistical engineering causal Bayesian socio-technical modeling body of knowledge.

    Integrating IDEF0 into a Systems Framework for Statistical Engineering

    Driven by a growing requirement during the 21st century for the integration of rigorous statistical analyses in engineering research, there has been a movement within the statistics and quality communities to evolve a unified statistical engineering body of knowledge (Hoerl and Snee, 2010; Anderson-Cook, 2012). Outside of the 2014 Statistical Engineering Agreement among the ASQ Statistics Division, DOT&E, NASA, and IDA, there has been little formal progress toward this goal since the May 2011 NASA Symposium on Statistical Engineering in Williamsburg, Virginia. In the ASEM-IAC 2012, Cotter (2012) identified the gaps in knowledge that statistical engineering needs to address, explored additional gaps in knowledge not addressed in the prior works, and set forth a working definition of and body of knowledge for statistical engineering. In the ASEM-IAC 2015, Cotter (2015) proposed a systemic causal Bayesian hierarchical model that addressed the knowledge gap needed to integrate deterministic mathematical engineering causal models within a stochastic framework. Missing, however, is the framework for specifying the hierarchical qualitative systems structures necessary and sufficient for specifying systemic causal Bayesian hierarchical models. This paper proposes revisions to and integration of IDEF0 as the framework for developing hierarchical qualitative systems models.

    Research Agenda into Human-Intelligence/Machine-Intelligence Governance

    Since the birth of modern artificial intelligence (AI) at the 1956 Dartmouth Conference, the AI community has pursued modeling and coding of human intelligence into AI reasoning processes (HI → MI). The Dartmouth Conference's fundamental assertion was that every aspect of human learning and intelligence could be so precisely described that it could be simulated in AI. With the exception of knowledge-specific areas (such as IBM's Deep Blue and a few others), sixty years later the AI community is not close to coding global human intelligence into AI. In parallel, the knowledge management (KM) community has pursued understanding of organizational knowledge creation, transfer, and management (HI → HI) over the last 40 years. Knowledge management evolved into an organized discipline in the early 1990s through formal university courses and the creation of the first chief knowledge officer positions. Correspondingly, over the last 25 years there has been growing research into the transfer of intelligence and cooperation among computing systems and automated machines (MI → MI). In stark contrast to the AI community's effort, there has been little research into transferring AI knowledge and machine intelligence into human intelligence (MI → HI) with the goal of improving human decision making. Most important, there has been no research into human-intelligence/machine-intelligence decision governance; that is, the policies and processes governing human-machine decision making toward systemic mission accomplishment. To address this gap, this paper reports on a research initiative and framework toward developing an HI-MI decision governance body of knowledge and discipline.

    A Hierarchical Statistical Engineering Modeling Methodology

    In the ASEM-IAC 2015, Cotter (2015) proposed a systemic joint deterministic-stochastic dynamic causal Bayesian statistical engineering model that addressed the knowledge gap needed to integrate deterministic mathematical engineering models within a stochastic framework. However, Cotter did not specify the modeling methodology through which statistical engineering models could be developed, diagnosed, and applied to predict systemic mission performance. This paper updates research into the development of a hierarchical statistical engineering modeling methodology and sets forth the initial theoretical foundation for the methodology.

    Human-Intelligence/Machine-Intelligence Decision Governance: An Analysis from Ontological Point of View

    The increasing CPU power and memory capacity of computers, and now computing appliances, in the 21st century have allowed accelerated integration of artificial intelligence (AI) into organizational processes and everyday life. Artificial intelligence can now be found in a wide range of organizational processes, including medical diagnosis, automated stock trading, integrated robotic production systems, telecommunications routing systems, and automobile fuzzy logic controllers. Self-driving automobiles are just the latest extension of AI. This thrust of AI into organizations and everyday life rests on the AI community's unstated assumption that "…every aspect of human learning and intelligence could be so precisely described that it could be simulated in AI. With the exception of knowledge specific areas …, sixty years later the AI community is not close to coding global human intelligence into AI" (Cotter, 2015). Thus, in complex mission-environment situations it is still debatable whether and when human or machine decision capacity should govern, or when a joint human-intelligence/machine-intelligence (HI-MI) decision capacity is required. Most important, there has been no research into the governance and management of HI-MI decision processes. To address this gap, research has been initiated into an HI-MI decision governance body of knowledge and discipline. This paper updates progress in one track of that research, specifically into establishing the ontological basis of HI-MI decision governance, which will form the theoretical foundation of a systemic HI-MI decision governance body of knowledge.

    Research Agenda in Developing Core Reference Ontology for Human Intelligence/Machine-Intelligence Electronic Medical Records System

    Beginning around 1990, efforts were initiated in the medical profession by the U.S. government to transition from paper-based medical records to electronic medical records (EMR). By the late 1990s, EMR implementation had already encountered multiple barriers and failures. Then-President Bush set forth the goal of implementing electronic health records (EHRs) nationwide within ten years; again, progress toward EMR implementation was not realized. President Obama put new emphasis on promoting EMR and health care technology, but the renewed emphasis did not overcome many of the original problems and induced new failures. Retrospective analyses suggest that failures were induced because programmers did not consider the medical socio-technical communications structures that had evolved around paper records. Transition to electronic records caused breakdowns in the medical socio-technical communications systems; induced inconsistencies in information exchanges among clinics, physicians, hospitals, laboratories, pharmacies, and health insurance providers; and resulted in the incorrect administration of prescriptions, errors in patient care, and unnecessary treatments and surgeries. With the rapid integration of machine intelligence (MI) into medical socio-technical systems, there is a potential to repeat the failures of EMR implementation. To address the MI integration issue, this paper reports the research design for the development of a human-intelligence/machine-intelligence (HI-MI) EMR core reference ontology around which EMR-MI knowledge can be encoded to form the basis for an informed transition to artificially intelligent electronic medical records.

    An Attribute Agreement Method for HFACS Inter-Rater Reliability Assessment

    Inter-rater reliability is the degree of agreement among raters on a given item or circumstance. Multiple approaches have been taken to estimate and improve the inter-rater reliability of the United States Department of Defense Human Factors Analysis and Classification System (DoD-HFACS) as used by trained accident investigators. In this study, three trained instructor pilots used the DoD-HFACS to classify 347 U.S. Air Force Accident Investigation Board (AIB) Class-A reports from the years 2000 through 2013. The overall method consisted of four steps: (1) train on HFACS definitions, (2) verify rating reliability, (3) rate HFACS reports, and (4) draw a random sample to validate rating reliability. Attribute agreement analysis was used to assess inter-rater reliability. In the final training verification round, within-appraiser agreement ranged from 85.28% to 93.25%, agreement of each appraiser versus the standard ranged from 77.91% to 82.82%, agreement between appraisers was 72.39%, and agreement of all appraisers versus the standard was 67.48%. Corresponding agreement for the random sample of HFACS-rated summaries was 78.89% to 92.78% within appraisers and 53.33% between appraisers, which is consistent with prior studies. This pilot study indicates that the train-verify-rate-validate attribute agreement analysis approach has the potential to improve HFACS rating reliability and to help accurately capture human factors contributions to aircraft mishaps. Additional full-scale studies will be required to verify and fully develop the proposed methodology.
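
    The agreement statistics reported above are percent-agreement measures of the kind attribute agreement analysis produces. As a minimal sketch with invented HFACS-style category codes (not data from the study), the two core computations are each appraiser versus a known standard and the proportion of items on which all appraisers assign the same code:

```python
# Hypothetical ratings: three raters assign one category code per report.
ratings = {
    "rater_A": ["SE", "PC", "SE", "OI", "PC", "SE", "UV", "PC", "PC", "SE"],
    "rater_B": ["SE", "PC", "UV", "OI", "PC", "SE", "UV", "OI", "SE", "SE"],
    "rater_C": ["SE", "PC", "SE", "OI", "UV", "SE", "UV", "PC", "PC", "SE"],
}
standard = ["SE", "PC", "SE", "OI", "PC", "SE", "UV", "OI", "PC", "SE"]

def pct_agree(a, b):
    """Percent of items on which two code sequences agree."""
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

# Each appraiser versus the known standard.
for name, codes in ratings.items():
    print(f"{name} vs standard: {pct_agree(codes, standard):.1f}%")

# Between appraisers: items on which all three assigned the same code.
all_same = sum(len({r[i] for r in ratings.values()}) == 1
               for i in range(len(standard)))
print(f"all appraisers agree: {100.0 * all_same / len(standard):.1f}%")
```

    Within-appraiser agreement is the same `pct_agree` computation applied to one rater's codes from two independent rating rounds.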